
Feature Proposal: Scale Buffer #15812


Closed

Conversation

elijah-rou

Proposal: Introduce a "Scale Buffer", which ensures that n extra service instances are always running.

We have had a number of customers who want control over the number of instances available to serve requests. This is particularly important with GPU instances, which typically can only process a single request at a time. Specifically, they have voiced a need for a feature that ensures n instances are always available to serve requests, above what the autoscaler suggests. This ensures that even at low volumes there is enough capacity.

This proposal introduces a scale-buffer annotation on the service manifest, which statically adds n to the desired pod count on top of the autoscaler's suggestion for a KPA. Even though the logic is pretty simple, we currently have this running in our Knative fork and it is working well. I figured it could be useful to others in the Knative community who are also running low-concurrency/no-concurrency workloads. Happy to amend it as needed if the maintainers wish to accept it as a proposal.
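
To make the shape of the proposal concrete, here is a minimal sketch of where a static buffer could be applied on top of the autoscaler's suggestion. The annotation key, function names, and the max-scale clamp below are illustrative assumptions for this example, not necessarily what the actual patch does.

```go
package main

import (
	"fmt"
	"strconv"
)

// Hypothetical annotation key, used only for this sketch.
const scaleBufferAnnotation = "autoscaling.knative.dev/scale-buffer"

// applyScaleBuffer adds a static buffer taken from the annotation to the
// autoscaler's suggested replica count, optionally clamped to maxScale.
func applyScaleBuffer(suggested int32, annotations map[string]string, maxScale int32) int32 {
	var buffer int32
	if v, ok := annotations[scaleBufferAnnotation]; ok {
		if n, err := strconv.ParseInt(v, 10, 32); err == nil && n > 0 {
			buffer = int32(n)
		}
	}
	desired := suggested + buffer
	if maxScale > 0 && desired > maxScale {
		desired = maxScale
	}
	return desired
}

func main() {
	ann := map[string]string{scaleBufferAnnotation: "10"}
	fmt.Println(applyScaleBuffer(0, ann, 0))  // 0 suggested  -> 10 pods
	fmt.Println(applyScaleBuffer(66, ann, 0)) // 66 suggested -> 76 pods
}
```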


knative-prow bot commented Mar 16, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: elijah-rou
Once this PR has been reviewed and has the lgtm label, please assign dprotaso for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow knative-prow bot requested review from dprotaso and skonto March 16, 2025 21:48
@knative-prow knative-prow bot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Mar 16, 2025

knative-prow bot commented Mar 16, 2025

Hi @elijah-rou. Thanks for your PR.

I'm waiting for a knative member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@dsimansk
Contributor

/ok-to-test

@knative-prow knative-prow bot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 26, 2025

knative-prow bot commented Mar 26, 2025

@elijah-rou: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: unit-tests_serving_main
Commit: 19fe833
Details: link
Required: true
Rerun command: /test unit-tests

Your PR dashboard.


@dprotaso
Member

dprotaso commented Apr 2, 2025

Specifically, they have voiced a need for a feature that ensures n instances are always available to serve requests, above what the autoscaler suggests. This ensures that even at low volumes there is enough capacity.

Any chance you can elaborate on the problem more?

@elijah-rou
Author

Specifically, they have voiced a need for a feature that ensures n instances are always available to serve requests, above what the autoscaler suggests. This ensures that even at low volumes there is enough capacity.

Any chance you can elaborate on the problem more?

Many of our clients wish to maintain static capacity so that they can handle bursts effectively. These are concurrency=1, GPU-workload scenarios. At low traffic volumes they would like n machines to be available to handle load, but at low enough volumes they cannot rely on purely proportional scaling through the scaling target. They also do not want to overprovision capacity.

For example: I know my traffic roughly varies within a 10-node range, i.e. at any point, 10 more requests may come in during the scaling evaluation interval. If I set my buffer to 10, then with 0 in-flight requests I have 0+10 machines total, and with 60 in-flight requests I have 60+10 machines total. The buffer is maintained even as requests grow.

This works in tandem with the autoscaler, which is where it becomes useful. Say I have set target_utilization to 90 (since concurrency=1, this effectively means I want to provision roughly 10% more machines than the requests I serve). So with 60 in-flight requests here, I would have provisioned about 66 machines in total. At low volumes this may not be enough buffer to handle usual traffic growth, so if I introduce a buffer of 10, scale_buffer simply adds n statically to the autoscaler's suggested amount (in our example, 60 + (6 + 10) = 76 machines). This ensures that at low volumes I have enough capacity, while at high volumes I do not over-scale the way I would by setting the scaling target to a lower number (80%, 70%, etc.).
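
A small runnable sketch of the arithmetic above, assuming the autoscaler's suggestion is roughly in-flight requests divided by the utilization target (rounding up gives 67 rather than the ~66 approximated above), with the static buffer added afterwards. The names here are illustrative only, not the actual KPA internals.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas sketches the calculation described above: a KPA-style
// suggestion from in-flight requests and the utilization target, plus a
// static buffer added on top.
func desiredReplicas(inFlight, targetUtilization float64, buffer int) int {
	suggested := int(math.Ceil(inFlight / targetUtilization))
	return suggested + buffer
}

func main() {
	fmt.Println(desiredReplicas(0, 0.9, 10))  // quiet period: 0 + 10 = 10 replicas
	fmt.Println(desiredReplicas(60, 0.9, 10)) // 60 in-flight: 67 + 10 = 77 replicas
}
```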

@dprotaso
Member

dprotaso commented Apr 5, 2025

Unsure if you're aware, but we have an activation-scale knob - when scaling from zero it will come up with a minimum number of replicas

See: https://knative.dev/docs/serving/autoscaling/scale-bounds/#scale-up-minimum

Though it will eventually scale down based on the request load - there was a discussion about that here: #14017

@dprotaso
Member

dprotaso commented Apr 5, 2025

so that they can handle bursts effectively.

We also have a target-burst-capacity knob:
https://knative.dev/docs/serving/load-balancing/target-burst-capacity/#setting-the-target-burst-capacity

@elijah-rou
Author

I am aware of both of these; neither caters to the use case I am proposing. Activation scale only works when scaling from zero; it does not help once we have already scaled up. Target burst capacity does not help either: we do not want requests to be queued onto targets. These requests can take anywhere from 5 minutes to 3 hours, and with concurrency=1 a request queued for burst capacity would be blocked behind the one in flight. If a request queues prematurely, it can get stuck on an instance indefinitely. Target burst capacity in this case must be -1 (i.e. no target burst capacity). This feature predominantly addresses concurrency=1 scenarios for GPU workloads.

@skonto
Contributor

skonto commented Apr 7, 2025

Hi @elijah-rou, if I understand correctly, the proposal is to keep a buffer of replicas around in order to make sure you have enough capacity to guarantee that latency is acceptable (no queues) and to absorb spikes. Is one request per replica due to a model restriction or to resources?

In any case, buffer-based capacity management on top of autoscaling is already used elsewhere, keeping capacity above demand at any given time. I think it would make more sense to have a buffer percentage rather than a static value, since a static value does not adapt to traffic patterns. I understand, though, that replicas are attached to GPUs and you want to control the number somehow. It would also be preferable to be able to adjust the buffer percentage based on volatility (changes in variance) to cover changing spike patterns. Here things seem static, aiming for a buffer that covers the specific workload's demands. I guess the goal of the proposal is to adjust the buffer externally, via some tooling, per service.
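
A minimal sketch of the percentage-based variant suggested here, assuming the buffer is expressed as a fraction of the autoscaler's current suggestion; the names are illustrative. Note that a pure percentage adds nothing when the suggestion is near zero, which is part of why the proposal above favours a static n at low volumes.

```go
package main

import (
	"fmt"
	"math"
)

// bufferedScale adds headroom proportional to the current suggestion,
// so the buffer grows and shrinks with traffic instead of staying fixed.
func bufferedScale(suggested int, bufferPercent float64) int {
	extra := int(math.Ceil(float64(suggested) * bufferPercent / 100))
	return suggested + extra
}

func main() {
	fmt.Println(bufferedScale(6, 25))  // low traffic:  6 -> 8
	fmt.Println(bufferedScale(60, 25)) // higher load: 60 -> 75
}
```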

To summarize, you want to roughly follow the demand curve, and you use the KPA plus a static buffer to implement it.
I think predictive autoscaling (#15068) is a better fit in this scenario, for certain workloads of course. We added the scale_down_delay option in the past, and I think we need another, a proactive scale_up_speed_up that would be dynamically adjusted (one idea), so that when scaling up you can quickly follow the predicted demand rather than waiting for the KPA. However, in order not to block this PR we could put it behind a flag and iterate. cc @dprotaso

@knative-prow-robot knative-prow-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 14, 2025
@knative-prow-robot
Contributor

PR needs rebase.


@elijah-rou
Author

I have no strong opinion on whether we merge this. It entirely depends on whether you feel users can make use of it, or whether there is indeed something more suitable (I would be happy with some sort of dynamic percentage based on both volatility and volume, as you mentioned, for example). I am happy to use this as a starting point to tackle #15068 instead, and I can aim for the 1.19 release (since 1.18 is around the corner).

@dprotaso
Member

dprotaso commented Apr 18, 2025

I have no strong opinion on whether we merge this.

I think we shouldn't merge this and should instead discuss (in the issue) what would have broader appeal. I think supporting just scale + N is a bit narrow. I don't know what the right answer is, though - it might be domain-dependent, similar to how there's LLM metric-aware routing - https://github.com/kubernetes-sigs/gateway-api-inference-extension

I also don't want to make serving overly complex - the goal is to simplify things for app developers. So there's a balance to be had.

@dprotaso dprotaso closed this Apr 18, 2025